# Mathematical Problem Solving

## DeepSeekMath 7B MathFusion
QizhiPei · Apache-2.0 · Large Language Model · Transformers · English

MathFusionQA is a mathematical problem-solving model based on deepseek-math-7b-base that enhances the mathematical problem-solving capabilities of large language models through instruction fusion.
## Qwen2.5 1.5B Pedagogical Reward Model
eth-nlped · Large Language Model · Transformers

A reward model trained on the MathDial and MRBench datasets, specializing in mathematics-education assistance and scaffolded learning.
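Reward models of this kind are commonly trained with a pairwise (Bradley-Terry) objective: given a preferred and a rejected answer to the same question, the loss pushes the preferred answer's score above the rejected one's. The sketch below is illustrative only; this particular model's training recipe is not specified here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pairwise_reward_loss(r_chosen, r_rejected):
    # Bradley-Terry style objective: -log P(chosen beats rejected),
    # where the win probability is sigmoid of the score difference.
    return -np.log(sigmoid(r_chosen - r_rejected))
```

When the two scores are equal the loss is log 2; it shrinks as the chosen answer's score pulls ahead and grows when the ranking is inverted.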
## Doge 160M Reason Distill
SmallDoge · Apache-2.0 · Large Language Model · Transformers · English

Doge 160M Reason Distill is a lightweight language model built on a dynamic masked-attention mechanism and a cross-domain mixture of experts, focused on reasoning and question-answering tasks.
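Distillation, as used to produce a "distilled" model like this one, trains a small student to match a larger teacher's output distribution rather than just hard labels. A generic sketch of the temperature-softened KL distillation loss (illustrative; not SmallDoge's actual training code):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces a softer distribution.
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) at temperature T, scaled by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T
```

The loss is zero when the student reproduces the teacher's logits exactly and strictly positive otherwise.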
## Open Reasoner Zero 7B
Open-Reasoner-Zero · MIT · Large Language Model · Transformers

Open Reasoner Zero is an open-source solution for large-scale, reasoning-oriented reinforcement learning on top of base models, emphasizing scalability, simplicity, and ease of use.
## OpenMath2 Llama3.1 70B
nvidia · Large Language Model · Transformers · English

OpenMath2-Llama3.1-70B is a large language model specialized for mathematics, fine-tuned from the Llama3.1-70B-Base model on the OpenMathInstruct-2 dataset.
## Llama 3 Smaug 8B
abacusai · Large Language Model · Transformers

An optimized model based on Meta Llama 3 that improves performance in multi-turn dialogue scenarios.
## HeroBophades 3x7B
nbeerbower · Apache-2.0 · Large Language Model · Transformers

HeroBophades-3x7B is an experimental mixture-of-experts LLM built with mergekit, designed to run in 4-bit mode on GPUs with 12 GB of VRAM.
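A mixture-of-experts layer routes each input through a small number of expert networks selected by a learned gate, which is what lets a "3x7B" model activate only part of its weights per token. A minimal top-k routing sketch (names, shapes, and the toy experts are illustrative, not HeroBophades' actual architecture):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_forward(x, experts, gate_weights, k=2):
    # Score every expert, keep the top-k, and mix their outputs
    # with softmax-normalized gate weights.
    scores = gate_weights @ x
    top_k = np.argsort(scores)[-k:]
    mix = softmax(scores[top_k])
    return sum(w * experts[i](x) for w, i in zip(mix, top_k))

# Usage: three toy "experts" (scalar multipliers) on a 4-dim input.
rng = np.random.default_rng(0)
experts = [lambda v, s=s: s * v for s in (1.0, 2.0, 3.0)]
gate = rng.normal(size=(3, 4))
x = rng.normal(size=4)
y = moe_forward(x, experts, gate, k=2)
```

Because the mixing weights sum to one, routing through identical experts reproduces a single expert's output exactly.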
## LeerooDedicated Math 7B
leeroo · Large Language Model · Transformers

This model is built through an expert-collaboration method and specializes in mathematical problem solving; it generates solutions autonomously or invokes a GPT-4-level model when necessary.
## E.star.7.b
liminerity · Apache-2.0 · Large Language Model · Transformers · English

A 7B-parameter large language model based on the Mistral architecture, trained efficiently with the Unsloth and TRL libraries and performing well across multiple benchmarks.
## TheProfessor 155B
abacusai · Large Language Model · Transformers

TheProfessor is a hybrid model created by merging multiple pre-trained language models with mergekit, specializing in conversational interaction, logical reasoning, scientific research, medical knowledge, and mathematics.
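At its simplest, mergekit-style model merging is a weighted average of matching parameter tensors across models with the same architecture. The sketch below shows only that linear case (mergekit also supports more sophisticated strategies such as SLERP and TIES, which this does not implement):

```python
import numpy as np

def merge_linear(state_dicts, weights):
    # Weighted average of matching parameter tensors; all models must
    # share the same architecture (same tensor names and shapes).
    assert abs(sum(weights) - 1.0) < 1e-8
    return {
        name: sum(w * sd[name] for w, sd in zip(weights, state_dicts))
        for name in state_dicts[0]
    }

# Usage: average two toy one-tensor "models" with equal weight.
a = {"layer.weight": np.array([[1.0, 2.0], [3.0, 4.0]])}
b = {"layer.weight": np.array([[3.0, 4.0], [5.0, 6.0]])}
merged = merge_linear([a, b], [0.5, 0.5])
```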
## Llama 2 7B HF 4-bit 64-rank
LoftQ · MIT · Large Language Model · Transformers · English

LoftQ (LoRA-Fine-tuning-aware Quantization) provides a quantized backbone network together with LoRA adapters. It is designed specifically for LoRA fine-tuning, improving the performance and efficiency of fine-tuning large language models under quantization.
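The core idea behind LoftQ can be sketched in a few lines: alternate between quantizing the weight matrix and fitting a low-rank correction by SVD, so that the quantized backbone plus the LoRA factors approximates the original weights better than quantization alone. The 4-bit quantizer below is a uniform stand-in for the NF4 format actually used; this is an illustrative sketch, not the library implementation.

```python
import numpy as np

def quantize_4bit(w):
    # Uniform symmetric 4-bit quantization (a stand-in for NF4):
    # round to one of 16 levels in [-8, 7] times a per-tensor scale.
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7)
    return q * scale

def loftq_init(W, rank=8, steps=5):
    # Alternating minimization: quantize the residual, then low-rank-fit
    # what quantization lost, so Q + A @ B tracks W.
    A = np.zeros((W.shape[0], rank))
    B = np.zeros((rank, W.shape[1]))
    for _ in range(steps):
        Q = quantize_4bit(W - A @ B)
        U, S, Vt = np.linalg.svd(W - Q, full_matrices=False)
        A = U[:, :rank] * S[:rank]   # U @ diag(S), truncated to `rank`
        B = Vt[:rank]
    return Q, A, B

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
Q, A, B = loftq_init(W)
err_quant_only = np.linalg.norm(W - quantize_4bit(W))
err_loftq = np.linalg.norm(W - (Q + A @ B))
```

The quantized `Q` plays the role of the frozen 4-bit backbone, while `A` and `B` become the LoRA adapter initialization.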
## Parallel 7B
Mathoctopus · Apache-2.0 · Large Language Model · Transformers · Multilingual

MathOctopus is a multilingual mathematical-reasoning large language model based on the LLaMA 2 architecture, supporting 10 languages and specializing in solving mathematical problems.
## MathCoder CL 7B
MathLLMs · Apache-2.0 · Large Language Model · Transformers · English

Part of the MathCoder series of open-source large language models for general mathematical problem solving; the CL variants are fine-tuned from Code Llama.
## MathCoder L 7B
MathLLMs · Apache-2.0 · Large Language Model · Transformers · English

Part of the MathCoder series of open-source large language models for general mathematical problem solving; the L variants are fine-tuned from Llama-2.
## MetaMath 7B V1.0
meta-math · Large Language Model · Transformers

MetaMath-Llemma-7B is a mathematical-reasoning model fine-tuned on the MetaMathQA dataset, showing strong performance on the GSM8K and MATH benchmarks.
AIbase: Empowering the Future, Your AI Solution Knowledge Base
© 2025 AIbase